
Statistical Learning Theory for Neural Operators
In this talk, we present new results on the sample size required to learn surrogates of nonlinear mappings between infinite-dimensional Hilbert spaces. Such surrogate models have a wide range of applications and can be used in uncertainty quantification and parameter estimation problems in fields such as classical mechanics, fluid mechanics, electrodynamics, and the earth sciences. Here, the operator input determines the problem configuration, such as the initial conditions, material properties, or forcing terms of a partial differential equation (PDE) governing the underlying physics, while the operator output corresponds to the PDE solution. Our analysis shows that, for certain neural network architectures, empirical risk minimization can overcome the curse of dimensionality. Specifically, we show that both the number of network parameters and the number of input-output data pairs required for training remain manageable, with the error converging at an algebraic rate.
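
As an illustrative sketch (not the precise statement or constants from the talk), an algebraic convergence rate for empirical risk minimization over n input-output pairs means a generalization bound of the form

```latex
\mathbb{E}\Big[\big\|\mathcal{G}^{\dagger}-\widehat{\mathcal{G}}_{n}\big\|_{L^{2}_{\mu}}^{2}\Big]
\;\le\; C\, n^{-\alpha}, \qquad \alpha > 0,
```

where \(\mathcal{G}^{\dagger}\) denotes the true operator, \(\widehat{\mathcal{G}}_{n}\) the empirical risk minimizer, \(\mu\) the input distribution, and \(C,\alpha\) are hypothetical placeholders; overcoming the curse of dimensionality corresponds to such a rate holding despite the infinite-dimensional input space.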